Last PDF update generated: 21 September 2023, 15:17:12

Introduction

A behavioral study of foraging

The data were acquired by JS, CG, and CREW at Oxford University and at Inserm Lyon. In Lyon, Donut was tested at U1028, and Homer and Dali at U1208. The animals were tested every day with different versions of the apparatus. Testing was done in the animal housing using the first design built by JS:

PST 2020 board

On this board the 25 locations are numbered from 1 to 25, left to right and top to bottom, starting at the upper left corner.

Data loading and formatting

Data files include all data collected from the monkeys tested in Oxford and in Lyon.

This Markdown deals mostly with data from HOMER and DONUT.

Session types are initially coded “0” for control (transparent doors) and “1” for test (opaque blue doors), but we use the labels Clear and Blue in figures and analyses. The codes used for the chosen target, repeats, and misses reflect the position of the choice (location of the hole selected, from 1 to 25, top to bottom): a negative value indicates a repeat; 999 indicates a pause in the task; a value between 100 and 2500 indicates a miss (the animal tried but failed to pick up the reward), with the value divided by 100 giving the location.
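As a sanity check, this coding scheme can be decoded with a small helper (a sketch; `decode_choice` is our name, not part of the analysis code):

```r
# Hypothetical helper: decode a raw choice code into trial type and board
# location (1-25), following the scheme described above.
decode_choice <- function(code) {
  if (code == 999) {
    list(type = "pause", location = NA)          # pause in the task
  } else if (code < 0) {
    list(type = "repeat", location = -code)      # negative value: repeat
  } else if (code >= 100 && code <= 2500) {
    list(type = "miss", location = code / 100)   # miss: code / 100 is the hole
  } else {
    list(type = "correct", location = code)      # 1-25: first, rewarded choice
  }
}

decode_choice(-12)  # repeat at location 12
decode_choice(700)  # miss at location 7
```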

Main general Plots

Let us first look at descriptive graphs for all sessions per monkey. The figures show, for each monkey, the choices (location of the choice on the board) selected trial by trial (chronological order from left to right). Green dots represent correct choices, i.e. a location chosen for the first time in the session with correct pickup of the reward. Blue dots represent returns to previously chosen locations; these are plotted as negative values so their time course can be followed separately from correct choices. Orange dots mark trials on which the reward was missed. Clear (transparent doors) and Blue (blue doors) sessions are presented separately. First, sample sessions:
General figure HOMER

General figure HOMER

Then all sessions are overlaid to show the overall tendencies. In particular, one can see the positive and negative trends that reflect monkeys choosing holes from top to bottom or from bottom to top. Homer and Donut have different preferred directions, but this is due to the different positions of the setup in their housing.

General figure DONUT

General figure DONUT

Summary

The data are then summarized in terms of frequency of each trial (choice) type: Correct, Miss, Repeat.

Summary all trials

Below is the average number of each trial type for the different ‘portes’ (doors) conditions.

We do not look at injections yet; this will be done later in the statistical analyses (DCZ vs sham). For all 8 monkeys, or for Donut and Homer alone, there is a main effect of doors, in particular on the number of repeats. This makes sense because in the Blue condition (compared to Clear) the monkeys need to rely on memory to avoid repeating, which they obviously do not fully succeed at. We will see later that the number of repeats is a very relevant parameter.

Stats

We perform a Poisson regression on the door effect for each monkey separately and test whether it influences the number of each trial type, still excluding DCZ sessions.

## Warning: In subset.data.frame(agg.data4BnoDCZ, singe = "Homer") :
## extra argument 'singe' will be ignored

## Warning: In subset.data.frame(agg.data4BnoDCZ, singe = "Homer") :
## extra argument 'singe' will be ignored
## 
## Call:
## glm(formula = trial ~ choice_type * portes, family = "poisson", 
##     data = subset(agg.data4BnoDCZ, singe = "Homer"))
## 
## Coefficients:
##                               Estimate Std. Error z value Pr(>|z|)    
## (Intercept)                    3.02732    0.01501 201.675   <2e-16 ***
## choice_typeMiss               -1.26114    0.03300 -38.218   <2e-16 ***
## choice_typeRepeat              0.39703    0.01947  20.395   <2e-16 ***
## portesclear                    0.04977    0.02806   1.773   0.0761 .  
## choice_typeMiss:portesclear   -0.66399    0.07733  -8.586   <2e-16 ***
## choice_typeRepeat:portesclear -1.60783    0.08040 -19.997   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 9988.4  on 807  degrees of freedom
## Residual deviance: 4442.9  on 802  degrees of freedom
## AIC: 7906.4
## 
## Number of Fisher Scoring iterations: 5
## Warning: In subset.data.frame(agg.data4BnoDCZ, singe = "Donut") :
## extra argument 'singe' will be ignored
## Warning: In subset.data.frame(agg.data4BnoDCZ, singe = "Donut") :
## extra argument 'singe' will be ignored
## 
## Call:
## glm(formula = trial ~ choice_type * portes, family = "poisson", 
##     data = subset(agg.data4BnoDCZ, singe = "Donut"))
## 
## Coefficients:
##                               Estimate Std. Error z value Pr(>|z|)    
## (Intercept)                    3.02732    0.01501 201.675   <2e-16 ***
## choice_typeMiss               -1.26114    0.03300 -38.218   <2e-16 ***
## choice_typeRepeat              0.39703    0.01947  20.395   <2e-16 ***
## portesclear                    0.04977    0.02806   1.773   0.0761 .  
## choice_typeMiss:portesclear   -0.66399    0.07733  -8.586   <2e-16 ***
## choice_typeRepeat:portesclear -1.60783    0.08040 -19.997   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 9988.4  on 807  degrees of freedom
## Residual deviance: 4442.9  on 802  degrees of freedom
## AIC: 7906.4
## 
## Number of Fisher Scoring iterations: 5
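Note that the warnings above (translated from French: “extra argument ‘singe’ will be ignored”) reveal that `subset()` was called with `singe = "Homer"` (a single `=`) instead of the comparison `singe == "Homer"`. The extra argument is silently dropped, the full data set is used for both fits, and this is why the Homer and Donut coefficient tables are identical. A minimal sketch of the problem and the fix, on a toy data frame standing in for agg.data4BnoDCZ:

```r
# Toy stand-in for agg.data4BnoDCZ (column names follow the calls above).
agg <- data.frame(singe = c("Homer", "Homer", "Donut"),
                  trial = c(10, 12, 30))

# Buggy: `singe = "Homer"` is passed as an unused extra argument, so all
# rows are kept and a warning is emitted (as in the output above).
n_buggy <- nrow(suppressWarnings(subset(agg, singe = "Homer")))  # 3

# Fixed: `==` makes it a logical filter on the rows.
n_fixed <- nrow(subset(agg, singe == "Homer"))                   # 2
```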

Exploration strategies

An interesting set of future analyses concerns exploration strategies: how the animals scan through the setup, and then of course how they forget and repeat choices. This needs to be quantified before comparing ON/OFF DCZ sessions.

One thing we can look at is the spatial variance between successive choices (here we do not differentiate Miss, Correct, or Repeat trials).

The figures below show the distributions of Euclidean distances between two successive choices. The distance between two adjacent holes (both vertically and horizontally) is 6.5 cm, so we see harmonics at approximately 6.5 cm in the distributions. Quite obviously, the harmonic is stronger in TEST than in CONTROL.

Note that here every choice is counted, even repeats.
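The distance computation can be sketched as follows (helper names are ours): map each location (1-25, left-to-right, top-to-bottom) onto a 5x5 grid with 6.5 cm spacing, then take Euclidean distances between successive choices.

```r
# Board location (1-25) to x/y coordinates in cm (5 columns, 6.5 cm apart).
loc_to_xy <- function(loc) {
  c(x = ((loc - 1) %% 5) * 6.5,   # column, left to right
    y = ((loc - 1) %/% 5) * 6.5)  # row, top to bottom
}

# Euclidean distances between successive choices in a session.
succ_dist <- function(locs) {
  xy <- t(sapply(locs, loc_to_xy))
  sqrt(diff(xy[, "x"])^2 + diff(xy[, "y"])^2)
}

succ_dist(c(1, 2, 7))  # 6.5 (one step right), then 6.5 (one step down)
```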

ATTENTION: here we select only sessions that are labelled no, sham, or DCZ (i.e. >=23 for Homer and >=24 for Donut).

The distribution of distances between choices has a particular, somewhat log-normal form. We can look at these distributions depending on conditions and also compare them with a random sampling of distances. Let’s first look at this across the two monkeys.

The red shows the distributions for the monkeys, and the blue shows a random sample of 10,000 distances.

Separated by animal for the Clear and Blue conditions, we can see (below) that the distribution and oscillation effects are stronger in Blue than in Clear.

Spatial strategy. Distributions of Euclidean distances

We test the difference in distributions between Clear and Blue for the two animals separately, excluding DCZ sessions:
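The comparison is a two-sample Kolmogorov-Smirnov test; a hedged sketch of the call on toy data (the real column names — distloc, singe, portes, Injection — appear in the output):

```r
set.seed(1)
# Toy distances standing in for data4B$distloc in the two door conditions.
clear_d <- rexp(100, rate = 1 / 6.5)
blue_d  <- rexp(100, rate = 1 / 9.0)

# Two-sample KS test: D is the maximum gap between the two empirical CDFs.
ks <- ks.test(clear_d, blue_d)
ks$statistic
ks$p.value
```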

## 
##  Asymptotic two-sample Kolmogorov-Smirnov test
## 
## data:  data4B$distloc[data4B$singe == "Homer" & data4B$portes == "clear" & data4B$Injection != "DCZ"] and data4B$distloc[data4B$singe == "Homer" & data4B$portes == "blue" & data4B$Injection != "DCZ"]
## D = 0.11569, p-value = 0.01824
## alternative hypothesis: two-sided
## 
##  Asymptotic two-sample Kolmogorov-Smirnov test
## 
## data:  data4B$distloc[data4B$singe == "Donut" & data4B$portes == "clear" & data4B$Injection != "DCZ"] and data4B$distloc[data4B$singe == "Donut" & data4B$portes == "blue" & data4B$Injection != "DCZ"]
## D = 0.23403, p-value < 2.2e-16
## alternative hypothesis: two-sided

The KS tests indicate a difference between the two distributions (Clear vs Blue) for both monkeys.

The strength of the ‘harmonic’ could be a marker of the two strategies used in the different sessions. The heavier harmonics in TEST could reflect an increased number of jumps between distant targets, whereas in CONTROL the animal would be more attracted to the visible reward close to its current choice, hence proportionally more cases in which the animal chooses the target just next to the current one (6.5 cm away). But we would need more control sessions to be sure.

Subset of trials

One observation is that monkeys often behave in a more controlled, organized manner at the beginning of a session, and choices then become more dispersed. This could correlate approximately with completion of the task (i.e. having gone through all locations). So here we separate the first 25 trials from the others into two subsets (25 being the number of locations on the setup).
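A minimal sketch of the split (variable names are ours):

```r
# Label each trial of each session as belonging to the first 25 or the rest
# (25 = number of locations on the board).
choices <- data.frame(session = rep(1:2, each = 40),
                      trial   = rep(1:40, times = 2))
choices$subset <- ifelse(choices$trial <= 25, "first25", "after25")
table(choices$subset)
```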

Again, here we remove DCZ sessions (DCZ is taken into account in the statistical analyses below).

There are obviously a lot of repeats (checks?) after trial 25, when the animals continue trying to get rewards, especially in the Blue sessions. And Donut is a particularly avid checker…

The summary

Below is the average number of each trial type for the different injection conditions (here again for the first 25 trials):

Injection type 25 trials

Spatial strategy

Spatial organization in 2D space

The tendency to choose some locations rather than others can reveal spatial biases. So we use a 2D density mapping of choices to look at that.

Let’s also look at the patterns of choices, separating before and after trial 25:

STATISTICAL ANALYSES on DCZ vs Sham sessions

(Note we have no injection pre-surgery )

Here are the analyses and descriptions of the data for the two main session types used in the DCZ conditions. We subset the data to just the two monkeys and the two session types with an injection (sham and DCZ). We will also go through some more measures:

ATTENTION we remove the CONTROL sessions!!!!

## , ,  = Homer
## 
##        
##         sham DCZ
##   clear    6   8
##   blue     6   9
## 
## , ,  = Donut
## 
##        
##         sham DCZ
##   clear    7  10
##   blue    10  11

Descriptions of sessions

Here we answer a few general questions about the sessions, choices, repeats, etc.

First, were there more trials (choices+misses+repeats) in DCZ compared to sham sessions?

Summary 25 trials

## 
## Call:
## glm(formula = trial ~ Injection * portes, family = "poisson", 
##     data = subset(agg.data4B, singe == "Homer"))
## 
## Coefficients:
##                         Estimate Std. Error z value Pr(>|z|)    
## (Intercept)              3.12090    0.08575  36.396   <2e-16 ***
## InjectionDCZ            -0.16990    0.11785  -1.442   0.1494    
## portesblue               0.18110    0.11614   1.559   0.1189    
## InjectionDCZ:portesblue  0.29830    0.15369   1.941   0.0523 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 98.374  on 28  degrees of freedom
## Residual deviance: 72.329  on 25  degrees of freedom
## AIC: 225.62
## 
## Number of Fisher Scoring iterations: 4
## 
## Call:
## glm(formula = trial ~ Injection * portes, family = "poisson", 
##     data = subset(agg.data4B, singe == "Donut"))
## 
## Coefficients:
##                          Estimate Std. Error z value Pr(>|z|)    
## (Intercept)              3.347395   0.070888  47.221   <2e-16 ***
## InjectionDCZ            -0.001006   0.092446  -0.011    0.991    
## portesblue               0.909635   0.080259  11.334   <2e-16 ***
## InjectionDCZ:portesblue  0.149585   0.105226   1.422    0.155    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 507.290  on 37  degrees of freedom
## Residual deviance:  75.095  on 34  degrees of freedom
## AIC: 300.41
## 
## Number of Fisher Scoring iterations: 4
Summary 25 trials

Summary 25 trials

No: apparently there is no main effect of Injection on the length of sessions. We do have a significant interaction for Homer, suggesting a lower number of trials under DCZ (shorter sessions) in the blue-door condition.

Then we ask whether the last correct trial occurs later in DCZ than in sham sessions: this would mean that monkeys have more problems, or take more time, to find all (or most of) the rewards.

## 
## Call:
## glm(formula = trial_nb ~ Injection * portes, family = "poisson", 
##     data = subset(agg.maxcor, singe == "Homer"))
## 
## Coefficients:
##                         Estimate Std. Error z value Pr(>|z|)    
## (Intercept)              3.13549    0.08513  36.834   <2e-16 ***
## InjectionDCZ            -0.23133    0.11873  -1.948   0.0514 .  
## portesblue               0.08338    0.11795   0.707   0.4796    
## InjectionDCZ:portesblue  0.34862    0.15721   2.218   0.0266 *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 86.387  on 28  degrees of freedom
## Residual deviance: 67.975  on 25  degrees of freedom
## AIC: 219.62
## 
## Number of Fisher Scoring iterations: 4
## 
## Call:
## glm(formula = trial_nb ~ Injection * portes, family = "poisson", 
##     data = subset(agg.maxcor, singe == "Donut"))
## 
## Coefficients:
##                          Estimate Std. Error z value Pr(>|z|)    
## (Intercept)              3.352407   0.070711  47.410   <2e-16 ***
## InjectionDCZ            -0.006018   0.092310  -0.065   0.9480    
## portesblue               0.731887   0.081753   8.952   <2e-16 ***
## InjectionDCZ:portesblue  0.212183   0.107004   1.983   0.0474 *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 388.396  on 37  degrees of freedom
## Residual deviance:  83.286  on 34  degrees of freedom
## AIC: 305.58
## 
## Number of Fisher Scoring iterations: 4

Here we have an interaction for both Homer and Donut (although in different directions): for Homer, the last correct trial comes earlier under DCZ than in sham sessions (and perhaps in clear sessions too); for Donut, the last correct comes slightly later in blue DCZ compared to sham.

This might mean that those trials are instead repeats (since we have the same number of trials per session). So let’s do the same analysis for the Repeats, and then analyze the Repeats altogether.
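The quantity modelled here (trial_nb in agg.maxcor / agg.maxrpt) is presumably the rank of the last trial of a given type per session; a sketch of how such an aggregate can be computed:

```r
# Toy trial list; the real aggregation runs over data4B-like data.
trials <- data.frame(session     = c(1, 1, 1, 2, 2),
                     trial_nb    = c(1, 2, 3, 1, 2),
                     choice_type = c("Correct", "Repeat", "Correct",
                                     "Correct", "Repeat"))

# Rank of the last Correct trial in each session.
last_correct <- aggregate(trial_nb ~ session,
                          data = subset(trials, choice_type == "Correct"),
                          FUN  = max)
last_correct
```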

In the first analysis we take only the “blue” sessions because we have very few repeats in “clear”:

## 
## Call:
## glm(formula = trial_nb ~ Injection, family = "poisson", data = subset(agg.maxrpt, 
##     singe == "Homer"))
## 
## Coefficients:
##              Estimate Std. Error z value Pr(>|z|)    
## (Intercept)   3.28964    0.07881  41.741   <2e-16 ***
## InjectionDCZ  0.10784    0.09964   1.082    0.279    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 76.426  on 14  degrees of freedom
## Residual deviance: 75.245  on 13  degrees of freedom
## AIC: 155.95
## 
## Number of Fisher Scoring iterations: 4
## 
## Call:
## glm(formula = trial_nb ~ Injection, family = "poisson", data = subset(agg.maxrpt, 
##     singe == "Donut"))
## 
## Coefficients:
##              Estimate Std. Error z value Pr(>|z|)    
## (Intercept)   4.27110    0.03737 114.287  < 2e-16 ***
## InjectionDCZ  0.15755    0.04981   3.163  0.00156 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 86.851  on 20  degrees of freedom
## Residual deviance: 76.790  on 19  degrees of freedom
## AIC: 210.32
## 
## Number of Fisher Scoring iterations: 4

There is a main effect for both monkeys although, again, in opposite directions; in other words, the rank of the last repeat trial differs between DCZ and sham: earlier for Homer, later for Donut.

Below we can graph the overall trends of choices across sessions. The lines represent fits to the choice patterns (as shown in the very first figures). A positive slope means searching holes from top to bottom, and a negative slope from bottom to top. The length of each line covers the number of trials.
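These trend lines can be obtained from a simple linear fit of chosen location against trial number (a sketch; the actual fitting code may differ):

```r
# Toy session scanning the board from top to bottom: location grows with trial.
session <- data.frame(trial = 1:25, location = 1:25)
fit <- lm(location ~ trial, data = session)
coef(fit)[["trial"]]  # positive slope: top-to-bottom search
```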

There are changes regarding repeats, but these differ between the two monkeys. Let’s test the data statistically for all trials, for the first 25 trials, and for the trials beyond 25:

Summary 25 trials

Summary 25 trials

Trial types:

First, let’s look at the frequency of repeats, misses, etc.

## 
## Call:
## glm(formula = trial ~ choice_type/Injection, family = "poisson", 
##     data = subset(agg.data4B, singe == "Homer"))
## 
## Coefficients:
##                                 Estimate Std. Error z value Pr(>|z|)    
## (Intercept)                      2.80840    0.07089  39.617  < 2e-16 ***
## choice_typeMiss                 -2.06646    0.22944  -9.006  < 2e-16 ***
## choice_typeRepeat               -0.51839    0.13298  -3.898 9.69e-05 ***
## choice_typeCorrect:InjectionDCZ  0.01438    0.09231   0.156    0.876    
## choice_typeMiss:InjectionDCZ     0.10536    0.27603   0.382    0.703    
## choice_typeRepeat:InjectionDCZ   0.10789    0.14748   0.732    0.464    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 467.49  on 71  degrees of freedom
## Residual deviance: 137.20  on 66  degrees of freedom
## AIC: 419.96
## 
## Number of Fisher Scoring iterations: 5
## 
## Call:
## glm(formula = trial ~ choice_type/Injection, family = "poisson", 
##     data = subset(agg.data4B, singe == "Donut"))
## 
## Coefficients:
##                                 Estimate Std. Error z value Pr(>|z|)    
## (Intercept)                      3.15072    0.05019  62.778  < 2e-16 ***
## choice_typeMiss                 -0.91268    0.09584  -9.523  < 2e-16 ***
## choice_typeRepeat                0.16486    0.07288   2.262   0.0237 *  
## choice_typeCorrect:InjectionDCZ  0.04112    0.06690   0.615   0.5388    
## choice_typeMiss:InjectionDCZ    -0.29214    0.11751  -2.486   0.0129 *  
## choice_typeRepeat:InjectionDCZ   0.47991    0.06831   7.026 2.13e-12 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 1126.65  on 98  degrees of freedom
## Residual deviance:  467.27  on 93  degrees of freedom
## AIC: 928.27
## 
## Number of Fisher Scoring iterations: 5

## 
## Call:
## glm(formula = trial ~ Injection * portes, family = "poisson", 
##     data = subset(agg.data4B.first30, singe == "Homer" & choice_type == 
##         "Repeat"))
## 
## Coefficients:
##                         Estimate Std. Error z value Pr(>|z|)    
## (Intercept)               1.3863     0.3536   3.921 8.82e-05 ***
## InjectionDCZ              0.5596     0.5175   1.081    0.280    
## portesblue                0.8473     0.3780   2.242    0.025 *  
## InjectionDCZ:portesblue  -0.8633     0.5494  -1.571    0.116    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 27.507  on 17  degrees of freedom
## Residual deviance: 20.606  on 14  degrees of freedom
## AIC: 95.941
## 
## Number of Fisher Scoring iterations: 4
## 
## Call:
## glm(formula = trial ~ Injection * portes, family = "poisson", 
##     data = subset(agg.data4B.first30, singe == "Donut" & choice_type == 
##         "Repeat"))
## 
## Coefficients: (1 not defined because of singularities)
##                         Estimate Std. Error z value Pr(>|z|)
## (Intercept)               0.4642     0.7326   0.634    0.526
## InjectionDCZ              0.2289     0.1915   1.196    0.232
## portesblue                1.0833     0.7179   1.509    0.131
## InjectionDCZ:portesblue       NA         NA      NA       NA
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 22.709  on 21  degrees of freedom
## Residual deviance: 18.617  on 19  degrees of freedom
## AIC: 99.579
## 
## Number of Fisher Scoring iterations: 4

## 
## Call:
## glm(formula = trial ~ Injection * portes, family = "poisson", 
##     data = subset(agg.data4B.after30, singe == "Homer" & choice_type == 
##         "Repeat"))
## 
## Coefficients: (1 not defined because of singularities)
##                           Estimate Std. Error z value Pr(>|z|)  
## (Intercept)             -2.783e-15  1.000e+00   0.000   1.0000  
## InjectionDCZ             2.456e-15  3.203e-01   0.000   1.0000  
## portesblue               2.565e+00  1.038e+00   2.472   0.0134 *
## InjectionDCZ:portesblue         NA         NA      NA       NA  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 35.804  on 4  degrees of freedom
## Residual deviance: 19.299  on 2  degrees of freedom
## AIC: 43.84
## 
## Number of Fisher Scoring iterations: 5
## 
## Call:
## glm(formula = trial ~ Injection * portes, family = "poisson", 
##     data = subset(agg.data4B.after30, singe == "Donut" & choice_type == 
##         "Repeat"))
## 
## Coefficients:
##                           Estimate Std. Error z value Pr(>|z|)    
## (Intercept)              6.931e-01  4.082e-01   1.698   0.0895 .  
## InjectionDCZ            -9.471e-15  8.165e-01   0.000   1.0000    
## portesblue               2.711e+00  4.123e-01   6.576 4.82e-11 ***
## InjectionDCZ:portesblue  3.223e-01  8.199e-01   0.393   0.6943    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 375.79  on 24  degrees of freedom
## Residual deviance: 151.86  on 21  degrees of freedom
## AIC: 280.09
## 
## Number of Fisher Scoring iterations: 5

The statistics show no effect of condition (Injection) for Homer, but a significant increase in repeats for Donut. This is when we do not take PORTES into account; if Portes is included as an interacting fixed effect, the effect for Repeats in Donut weakens (p = 0.052).

Cumsum Reward

Test whether the speed of getting rewards changes between conditions and under DCZ vs sham.

## 
##  Exact two-sample Kolmogorov-Smirnov test
## 
## data:  cum$cv[cum$portes == "blue" & cum$singe == "Homer" & cum$Injection == "sham"] and cum$cv[cum$portes == "blue" & cum$singe == "Homer" & cum$Injection == "DCZ"]
## D = 0.29128, p-value = 0.01547
## alternative hypothesis: two-sided
## 
##  Asymptotic two-sample Kolmogorov-Smirnov test
## 
## data:  cum$cv[cum$portes == "blue" & cum$singe == "Donut" & cum$Injection == "sham"] and cum$cv[cum$portes == "blue" & cum$singe == "Donut" & cum$Injection == "DCZ"]
## D = 0.21, p-value = 0.02431
## alternative hypothesis: two-sided

## 
##  Exact two-sample Kolmogorov-Smirnov test
## 
## data:  cum30$cv[cum30$portes == "blue" & cum30$singe == "Homer" & cum30$Injection == "sham"] and cum30$cv[cum30$portes == "blue" & cum30$singe == "Homer" & cum30$Injection == "DCZ"]
## D = 0.23673, p-value = 0.08089
## alternative hypothesis: two-sided
## 
##  Exact two-sample Kolmogorov-Smirnov test
## 
## data:  cum30$cv[cum30$portes == "blue" & cum30$singe == "Donut" & cum30$Injection == "sham"] and cum30$cv[cum30$portes == "blue" & cum30$singe == "Donut" & cum30$Injection == "DCZ"]
## D = 0.1, p-value = 0.9654
## alternative hypothesis: two-sided

There is an impact of DCZ on the speed at which animals get rewards compared to sham, in the opaque condition but not in the clear one. Specifically, animals seem to accumulate rewards faster under DCZ. The effect is not significant if one takes only the first 50 trials for the two animals.

One possibility is that animals make fewer repeats at the beginning, thus accumulating rewards faster.
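The cumulative reward curves can be sketched as a running sum of a reward indicator (1 = rewarded Correct, 0 = Miss or Repeat); fewer early repeats steepen the initial part of the curve:

```r
# One toy session: outcome per trial, 1 if the reward was collected.
outcome <- c(1, 1, 0, 1, 0, 0, 1)
cumsum(outcome)  # running total of rewards: 1 2 2 3 3 3 4
```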

Cumsum repeats

Test whether repeats appear later, using a cumsum curve.

## 
##  Exact two-sample Kolmogorov-Smirnov test
## 
## data:  cumdr30$cumrepeats[cumdr30$portes == "blue" & cumdr30$singe == "Homer" & cumdr30$Injection == "sham"] and cumdr30$cumrepeats[cumdr30$portes == "blue" & cumdr30$singe == "Homer" & cumdr30$Injection == "DCZ"]
## D = 0.10653, p-value = 0.8789
## alternative hypothesis: two-sided
## 
##  Exact two-sample Kolmogorov-Smirnov test
## 
## data:  cumdr30$cumrepeats[cumdr30$portes == "blue" & cumdr30$singe == "Donut" & cumdr30$Injection == "sham"] and cumdr30$cumrepeats[cumdr30$portes == "blue" & cumdr30$singe == "Donut" & cumdr30$Injection == "DCZ"]
## D = 0.1, p-value = 0.9589
## alternative hypothesis: two-sided

No difference.

Stats: spatial strategy

## 
##  Asymptotic two-sample Kolmogorov-Smirnov test
## 
## data:  data4B$distloc[data4B$singe == "Homer" & data4B$portes == "clear" & data4B$Injection == "DCZ"] and data4B$distloc[data4B$singe == "Homer" & data4B$portes == "clear" & data4B$Injection == "sham"]
## D = 0.040033, p-value = 0.9998
## alternative hypothesis: two-sided
## 
##  Asymptotic two-sample Kolmogorov-Smirnov test
## 
## data:  data4B$distloc[data4B$singe == "Homer" & data4B$portes == "blue" & data4B$Injection == "DCZ"] and data4B$distloc[data4B$singe == "Homer" & data4B$portes == "blue" & data4B$Injection == "sham"]
## D = 0.035353, p-value = 0.9995
## alternative hypothesis: two-sided
## 
##  Asymptotic two-sample Kolmogorov-Smirnov test
## 
## data:  data4B$distloc[data4B$singe == "Donut" & data4B$portes == "clear" & data4B$Injection == "DCZ"] and data4B$distloc[data4B$singe == "Donut" & data4B$portes == "clear" & data4B$Injection == "sham"]
## D = 0.075606, p-value = 0.5193
## alternative hypothesis: two-sided
## 
##  Asymptotic two-sample Kolmogorov-Smirnov test
## 
## data:  data4B$distloc[data4B$singe == "Donut" & data4B$portes == "blue" & data4B$Injection == "DCZ"] and data4B$distloc[data4B$singe == "Donut" & data4B$portes == "blue" & data4B$Injection == "sham"]
## D = 0.012805, p-value = 1
## alternative hypothesis: two-sided

There is no difference in the distributions of distances between sham and DCZ.

Clusters of small distances

We look at clustering in the sense of successions of choices at small distances.
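One way to count such clusters (a sketch; the threshold and the definition — a run of at least two successive “small” distances — are our assumptions, not necessarily those of the analysis):

```r
# Successive-choice distances (cm); "small" means below 1.5 grid steps.
dists <- c(6.5, 6.5, 20, 6.5, 13, 6.5, 6.5, 6.5)
small <- dists < 6.5 * 1.5

# Run-length encoding: a cluster is a run of >= 2 small distances.
r <- rle(small)
n_clusters <- sum(r$values & r$lengths >= 2)
n_clusters  # 2 clusters in this toy sequence
```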

## 
## Call:
## lm(formula = normdist ~ portes.x * Injection.x, data = subset(newdata2, 
##     singe.x == "Donut"))
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -0.79702 -0.13816  0.00995  0.18219  0.59248 
## 
## Coefficients:
##                             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                  1.49680    0.11744  12.746 1.67e-14 ***
## portes.xblue                 0.13707    0.15312   0.895    0.377    
## Injection.xDCZ               0.16689    0.15312   1.090    0.283    
## portes.xblue:Injection.xDCZ -0.09747    0.20463  -0.476    0.637    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.3107 on 34 degrees of freedom
## Multiple R-squared:  0.05557,    Adjusted R-squared:  -0.02776 
## F-statistic: 0.6668 on 3 and 34 DF,  p-value: 0.5782
## 
## Call:
## lm(formula = normdist ~ portes.x * Injection.x, data = subset(newdata2, 
##     singe.x == "Homer"))
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -0.83539 -0.15426  0.01198  0.22810  0.58640 
## 
## Coefficients:
##                             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                  1.51960    0.14472  10.500 1.19e-10 ***
## portes.xblue                 0.13818    0.20466   0.675    0.506    
## Injection.xDCZ               0.06066    0.19145   0.317    0.754    
## portes.xblue:Injection.xDCZ -0.03478    0.26750  -0.130    0.898    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.3545 on 25 degrees of freedom
## Multiple R-squared:  0.0361, Adjusted R-squared:  -0.07957 
## F-statistic: 0.3121 on 3 and 25 DF,  p-value: 0.8164

## 
## Call:
## glm(formula = clust ~ portes * Injection, family = "poisson", 
##     data = subset(distClustyClust.session, singe == "Donut"))
## 
## Coefficients:
##                         Estimate Std. Error z value Pr(>|z|)    
## (Intercept)              0.93048    0.23736   3.920 8.85e-05 ***
## portesblue               0.24834    0.29513   0.841    0.400    
## InjectionDCZ             0.04319    0.30677   0.141    0.888    
## portesblue:InjectionDCZ -0.06184    0.39162  -0.158    0.875    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 5.3423  on 37  degrees of freedom
## Residual deviance: 4.0878  on 34  degrees of freedom
## AIC: Inf
## 
## Number of Fisher Scoring iterations: 4
## 
## Call:
## glm(formula = clust ~ portes * Injection, family = "poisson", 
##     data = subset(distClustyClust.session, singe == "Homer"))
## 
## Coefficients:
##                         Estimate Std. Error z value Pr(>|z|)   
## (Intercept)              0.85205    0.26663   3.196   0.0014 **
## portesblue               0.16344    0.36258   0.451   0.6521   
## InjectionDCZ             0.08893    0.34622   0.257   0.7973   
## portesblue:InjectionDCZ -0.15013    0.47226  -0.318   0.7506   
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 2.4731  on 28  degrees of freedom
## Residual deviance: 2.2651  on 25  degrees of freedom
## AIC: Inf
## 
## Number of Fisher Scoring iterations: 4

## 
## Call:
## lm(formula = normclust ~ portes.x * Injection.x, data = subset(newdata2, 
##     singe.x == "Donut"))
## 
## Residuals:
##       Min        1Q    Median        3Q       Max 
## -0.089494 -0.018065  0.001053  0.012574  0.133817 
## 
## Coefficients:
##                             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                  0.10861    0.01621   6.699 1.08e-07 ***
## portes.xblue                 0.05384    0.02114   2.547   0.0156 *  
## Injection.xDCZ               0.01660    0.02114   0.785   0.4377    
## portes.xblue:Injection.xDCZ -0.02010    0.02825  -0.712   0.4816    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.04289 on 34 degrees of freedom
## Multiple R-squared:  0.2231, Adjusted R-squared:  0.1545 
## F-statistic: 3.254 on 3 and 34 DF,  p-value: 0.03349
## 
## Call:
## lm(formula = normclust ~ portes.x * Injection.x, data = subset(newdata2, 
##     singe.x == "Homer"))
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -0.07266 -0.03549  0.01088  0.02618  0.07707 
## 
## Coefficients:
##                              Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                  0.150206   0.019347   7.764 4.04e-08 ***
## portes.xblue                 0.023614   0.027361   0.863    0.396    
## Injection.xDCZ              -0.027756   0.025594  -1.084    0.288    
## portes.xblue:Injection.xDCZ  0.006594   0.035762   0.184    0.855    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.04739 on 25 degrees of freedom
## Multiple R-squared:  0.144,  Adjusted R-squared:  0.04129 
## F-statistic: 1.402 on 3 and 25 DF,  p-value: 0.2655

After counting and normalizing, we do not find any difference in the clustering of distances between choices (no difference in the number of clusters). Hence the strategy of the animals, in terms of clustering choices at small distances, does not seem to differ.

Note that, surprisingly, only Donut shows the effect whereby the number of short-distance clusters is larger in Blue than in Clear conditions. Homer does not seem to change his strategy in that regard between the two conditions.

This is surprising because the analyses and figures above suggest that under DCZ, in the opaque (Blue) condition, the animals get rewards faster, which could be expected to go along with shorter distances between two successive choices. !! NEEDS further investigation !!
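The count-and-normalize step discussed above can be sketched as follows. This is a minimal illustration, not the exact code used in the analysis: the cutoff value and the per-session normalization are assumptions.

```r
# Sketch: proportion of short-distance transitions in a session.
# 'dists' is the vector of distances between successive choices;
# the cutoff (here 2) is an assumed value, not the one used in the analysis.
normclust <- function(dists, cutoff = 2) {
  sum(dists < cutoff) / length(dists)
}

normclust(c(1, 1, 3, 5, 1, 2))  # 3 of 6 transitions are shorter than 2 -> 0.5
```

Normalizing by the number of transitions makes sessions of different lengths comparable, which is what allows the `normclust` values to be modeled across conditions.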

Cumsum distance - trajectory

Test whether the speed of getting rewards changes between conditions and under DCZ vs sham.
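A minimal sketch of the cumulative-reward trajectory on toy data, assuming a per-trial 0/1 `rewarded` outcome; the column names here are hypothetical, not the ones in the actual data frame.

```r
# Toy sessions (hypothetical structure): per-trial 0/1 reward outcomes.
d <- data.frame(
  session  = rep(1:2, each = 5),
  trial    = rep(1:5, times = 2),
  rewarded = c(1, 0, 1, 1, 0, 1, 1, 1, 0, 1)
)
# Cumulative rewards within each session; the slope of cumrwd ~ trial
# indexes how fast the animal collects rewards.
d$cumrwd <- ave(d$rewarded, d$session, FUN = cumsum)
```

Condition and injection effects on speed could then be tested with something like `lm(cumrwd ~ trial * Injection, data = d)`, where a significant interaction with `trial` would indicate a change in slope.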

Post-error reaction vs Reward

Test how choices are made after unrewarded outcomes (e.g. the distance of the next choice: close or far). We hypothesize that in the Blue condition (where the animal cannot see the rewards) the distance after a negative outcome (a repeat) is larger than after a correct rewarded response, because when rewarded the animal will stay "in the patch", i.e. close to where it got the reward.

We remove Clear conditions and misses, as they are not appropriate for this analysis.
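For reference, one plausible way to compute the distance between successive choices on the 5×5 board (locations 1 to 25, numbered left to right, top to bottom). The unit spacing between holes is an assumption here; the actual `nextdist` values may be expressed in physical units.

```r
# Sketch: Euclidean distance between two board locations, assuming
# unit spacing between holes on the 5x5 grid.
loc2rowcol <- function(loc) {
  cbind(row = (loc - 1) %/% 5 + 1,
        col = (loc - 1) %%  5 + 1)
}
board.dist <- function(a, b) {
  pa <- loc2rowcol(a)
  pb <- loc2rowcol(b)
  sqrt(rowSums((pa - pb)^2))
}

board.dist(1, 2)   # horizontally adjacent holes: 1
board.dist(1, 25)  # opposite corners: sqrt(32), about 5.66
```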

## 
## Call:
## lm(formula = nextdist ~ choice_type * Injection, data = mean.nextdist)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -5.1743 -1.5799 -0.4643  1.0772 10.7234 
## 
## Coefficients:
##                                Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                      9.7099     0.5243  18.519  < 2e-16 ***
## choice_typeRepeat                1.9644     0.7348   2.673  0.00854 ** 
## InjectionDCZ                     0.5422     0.6978   0.777  0.43868    
## choice_typeRepeat:InjectionDCZ  -0.1899     0.9789  -0.194  0.84651    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 2.724 on 122 degrees of freedom
## Multiple R-squared:  0.1127, Adjusted R-squared:  0.09084 
## F-statistic: 5.163 on 3 and 122 DF,  p-value: 0.00215
## 
## Call:
## lm(formula = nextdist ~ choice_type, data = subset(mean.nextdist, 
##     singe == "Donut" & Injection == "sham"))
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -5.0988 -1.3195 -0.3308  0.6776  7.9012 
## 
## Coefficients:
##                   Estimate Std. Error t value Pr(>|t|)    
## (Intercept)         9.4888     0.5628  16.861   <2e-16 ***
## choice_typeRepeat   2.1100     0.7959   2.651   0.0116 *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 2.517 on 38 degrees of freedom
## Multiple R-squared:  0.1561, Adjusted R-squared:  0.1339 
## F-statistic: 7.028 on 1 and 38 DF,  p-value: 0.01163
## 
## Call:
## lm(formula = nextdist ~ choice_type, data = subset(mean.nextdist, 
##     singe == "Homer" & Injection == "sham"))
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -2.8004 -1.0319 -0.1918  1.1597  2.8950 
## 
## Coefficients:
##                   Estimate Std. Error t value Pr(>|t|)    
## (Intercept)        10.3416     0.6621  15.620 8.37e-10 ***
## choice_typeRepeat   1.5214     0.9066   1.678    0.117    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.752 on 13 degrees of freedom
## Multiple R-squared:  0.1781, Adjusted R-squared:  0.1148 
## F-statistic: 2.816 on 1 and 13 DF,  p-value: 0.1172

We observe a post-error effect: after a repeat (negative outcome) the next choice is further away than after a rewarded choice. The effect is, however, not significant for Homer, and there is no DCZ effect.

Another option is to check whether the probability of making a short vs. long shift depends on the previous reward, using logistic regression.
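A sketch of that logistic approach on toy data. The column names and the median split used to define a "short" shift are assumptions for illustration, not choices made in the analysis.

```r
set.seed(1)
# Toy data (hypothetical structure): previous outcome (0/1) and
# the distance of the next choice.
d <- data.frame(prevrwd  = rbinom(100, 1, 0.5),
                nextdist = runif(100, min = 0, max = 10))
# Code a shift as "short" when its distance is below the median (assumed cutoff).
d$shortshift <- as.integer(d$nextdist < median(d$nextdist))
# Does the probability of a short shift depend on the previous reward?
fit <- glm(shortshift ~ prevrwd, family = binomial, data = d)
summary(fit)$coefficients
```

On real data, `Injection` and `portes` could enter as interaction terms, mirroring the linear models above.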

How does this play out across successive trials: does the cumulative outcome (e.g. the average outcome over the last 5 choices) affect the leaving distance after a negative outcome? The hypothesis is that there is a threshold for "leaving the patch" in terms of the average reward encountered. We first look not at average values but at successions of rewards: ..010, .0110, 01110, 11110
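A cumulative-outcome regressor of this kind could be computed as a running mean over the last few choices; whether this matches the `Value` variable in the model below is an assumption, and the window length of 5 is taken from the text.

```r
# Sketch: mean outcome over the last k choices (including the current one),
# from a 0/1 reward vector 'rwd'. The window k = 5 follows the text above.
run.value <- function(rwd, k = 5) {
  sapply(seq_along(rwd), function(i) mean(rwd[max(1, i - k + 1):i]))
}

run.value(c(0, 1, 1, 1, 0, 1, 0))
```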

## 
## Call:
## lm(formula = distNeg ~ Value * Injection, data = subset(MeanValueNeg.nextdist, 
##     singe == "Donut" & Feedback == "Negative"))
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -5.8347 -1.9579 -0.3966  1.2683 15.2023 
## 
## Coefficients:
##                    Estimate Std. Error t value Pr(>|t|)    
## (Intercept)          9.7183     0.8168  11.898  < 2e-16 ***
## Value                3.2705     1.7683   1.850  0.06725 .  
## InjectionDCZ         2.1429     1.1124   1.926  0.05682 .  
## Value:InjectionDCZ  -7.5246     2.3665  -3.180  0.00195 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 3.258 on 103 degrees of freedom
## Multiple R-squared:  0.1064, Adjusted R-squared:  0.0804 
## F-statistic: 4.089 on 3 and 103 DF,  p-value: 0.008701
## 
## Call:
## lm(formula = distNeg ~ Value * Injection, data = subset(MeanValueNeg.nextdist, 
##     singe == "Homer" & Feedback == "Negative"))
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -10.0033  -3.5172  -0.1011   1.5623  19.9565 
## 
## Coefficients:
##                    Estimate Std. Error t value Pr(>|t|)    
## (Intercept)          14.748      2.246   6.568 1.45e-08 ***
## Value                -7.336      4.302  -1.705   0.0934 .  
## InjectionDCZ         -4.334      3.058  -1.417   0.1617    
## Value:InjectionDCZ   10.885      5.675   1.918   0.0600 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.728 on 59 degrees of freedom
## Multiple R-squared:  0.06487,    Adjusted R-squared:  0.01732 
## F-statistic: 1.364 on 3 and 59 DF,  p-value: 0.2625

Distance to repeat

## 
## Call:
## glm(formula = d2rpt ~ Injection * portes, family = "poisson", 
##     data = subset(stats.repeat, singe == "Homer"))
## 
## Coefficients:
##                         Estimate Std. Error z value Pr(>|z|)    
## (Intercept)               1.6977     0.1187  14.305  < 2e-16 ***
## InjectionDCZ             -0.9305     0.2231  -4.170 3.05e-05 ***
## portesblue                0.4071     0.1260   3.231  0.00123 ** 
## InjectionDCZ:portesblue   1.2262     0.2291   5.353 8.67e-08 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 1206.0  on 194  degrees of freedom
## Residual deviance: 1039.2  on 191  degrees of freedom
## AIC: 1745.5
## 
## Number of Fisher Scoring iterations: 5
## 
## Call:
## glm(formula = d2rpt ~ Injection * portes, family = "poisson", 
##     data = subset(stats.repeat, singe == "Donut"))
## 
## Coefficients:
##                         Estimate Std. Error z value Pr(>|z|)    
## (Intercept)              2.11626    0.10976  19.280  < 2e-16 ***
## InjectionDCZ            -0.06396    0.14568  -0.439    0.661    
## portesblue               0.60673    0.11062   5.485 4.13e-08 ***
## InjectionDCZ:portesblue  0.10458    0.14673   0.713    0.476    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 6066.4  on 897  degrees of freedom
## Residual deviance: 5954.8  on 894  degrees of freedom
## AIC: 9830.7
## 
## Number of Fisher Scoring iterations: 6
## 
##   Simultaneous Tests for General Linear Hypotheses
## 
## Multiple Comparisons of Means: Tukey Contrasts
## 
## 
## Fit: glm(formula = d2rpt ~ -1 + BV, family = "poisson", data = subset(d, 
##     singe == "Homer"))
## 
## Linear Hypotheses:
##                             Estimate Std. Error z value Pr(>|z|)    
## DCZ.clear - sham.clear == 0 -0.93048    0.22314  -4.170  < 0.001 ***
## sham.blue - sham.clear == 0  0.40712    0.12600   3.231  0.00548 ** 
## DCZ.blue - sham.clear == 0   0.70286    0.12240   5.742  < 0.001 ***
## sham.blue - DCZ.clear == 0   1.33760    0.19365   6.907  < 0.001 ***
## DCZ.blue - DCZ.clear == 0    1.63334    0.19132   8.537  < 0.001 ***
## DCZ.blue - sham.blue == 0    0.29574    0.05186   5.702  < 0.001 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## (Adjusted p values reported -- single-step method)
## 
##   Simultaneous Tests for General Linear Hypotheses
## 
## Multiple Comparisons of Means: Tukey Contrasts
## 
## 
## Fit: glm(formula = d2rpt ~ -1 + BV, family = "poisson", data = subset(d, 
##     singe == "Donut"))
## 
## Linear Hypotheses:
##                             Estimate Std. Error z value Pr(>|z|)    
## DCZ.clear - sham.clear == 0 -0.06396    0.14568  -0.439   0.9661    
## sham.blue - sham.clear == 0  0.60673    0.11062   5.485   <1e-04 ***
## DCZ.blue - sham.clear == 0   0.64735    0.11031   5.868   <1e-04 ***
## sham.blue - DCZ.clear == 0   0.67070    0.09676   6.932   <1e-04 ***
## DCZ.blue - DCZ.clear == 0    0.71131    0.09641   7.378   <1e-04 ***
## DCZ.blue - sham.blue == 0    0.04062    0.01755   2.314   0.0757 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## (Adjusted p values reported -- single-step method)

Regarding the distance between a choice and its repeat, DCZ effects are absent in Donut but present in Homer: for Homer, distances are longer under DCZ than sham in Blue conditions, but much shorter under DCZ than sham in Clear conditions.

2D choices


  1. The first descriptive analyses were in Graph_PST2020choices.R; they are now in PST2020_DREADDs.rmd.↩︎